Is the latest 'Ghibli' trend a leap for OpenAI's facial recognition capability?

Artificial intelligence (AI) is evolving at an incredible pace, with the latest development being the move from text-based learning to image-based learning. This has been fuelled by the "Ghiblification" of personal photos uploaded to ChatGPT, where the AI analyses faces to generate images in the style of Japanese animation studio Studio Ghibli.

As the trend gains popularity, one question seems to have crossed few users' minds: what happens to the uploaded photos?

Whether they are aware of it or not, by participating in the trend, users are handing over their data to the AI company. "When you upload photos to these platforms, they're not just being processed for your immediate Studio Ghibli conversion - they're potentially being stored, analysed, and incorporated into training datasets that improve the AI's capabilities," Lierence Li, managing director of Market Hubs, told MARKETING-INTERACTIVE.

"Your personal photos, which may contain your face, family members, home location, or other identifiable information, could become part of the AI's learning material. You need to carefully evaluate whether the momentary enjoyment of a stylised image is worth the potential long-term privacy implications," Li added.

This presents risks such as uploaded images being used to generate misleading content, and collected data being used for personalised ads or even sold to third parties, said Guo-You Chew, managing director, APAC, at Tommy.

Don't miss: Study: APAC consumers embrace AI, but anxious about data privacy 

What is the 'Ghibli' trend? 

Over the weekend, "Ghibli" images took social media by storm. The trend saw AI-generated images created in the style of Japanese animator, filmmaker and manga artist Hayao Miyazaki. Miyazaki is best known as the co-founder of Studio Ghibli, the animation studio behind cult classics My Neighbour Totoro, Spirited Away and Princess Mononoke, amongst others.

The trend saw users generating "Ghiblified" images of public figures such as US president Donald Trump and iconic scenes across pop culture. It didn't take long for users to then start uploading personal images - wedding photos, selfies, family pictures and adorable pets - into ChatGPT to turn their precious memories into Studio Ghibli moments.

OpenAI CEO Sam Altman kicked off the trend when he announced ChatGPT's newest image generation capability on 26 March. He later changed his X profile picture to a "Ghiblified" self-portrait. According to the CEO, millions of users have since flocked to the AI platform to generate images of their own. Checks by MARKETING-INTERACTIVE later revealed that the image generation feature has been rolled out to all free users today (1 April).

Is AI the problem? 

While the idea of an AI learning and recognising the faces of loved ones sounds scary, industry professionals MARKETING-INTERACTIVE spoke to overwhelmingly agree that this isn't an AI or ChatGPT problem. Rather, users have signed up for it.

This is especially so since everything is spelled out in these companies' terms. "The majority of users don't actually read the fine print before using these platforms. Platforms such as ChatGPT explicitly state in their privacy policies that collected personal data will be used to train their AI models unless users opt out," said Chew.

"These risks are really down to the individual to assess, especially if the platforms being used are free. You could very well end-up paying with your data," he added. 

As such, users are encouraged to always read the fine print. "A platform's job is to offer a service, ideally with transparency and clear boundaries. If users then accept that, they can't blame it on the platform," said Dominique Rose Van-Winther, chief AI evangelist, Final Upgrade. 

"ChatGPT has never been secretive about its data practices — they’re in the documentation," stated Van-Winther, adding that: 

The bottom line? Privacy isn’t something that gets taken away — it’s something you give away, usually one 'I accept' click at a time. 

With terms clearly written out, the responsibility lies with the individual user. "The fundamental question is whether the service provided is worth the data you're giving up. Being conscious about these tradeoffs is essential as AI becomes more integrated into our daily lives. Companies can build trust by being transparent, but users must also take responsibility for their digital footprint," said Market Hubs' Li. 

Maintaining transparency and trust

According to Li, AI companies should seek explicit consent from users before using uploaded images for training. While such terms have been put in place, AI companies cannot assume that users have read them before using these applications. "This 'implied consent' model is problematic because most users don't read the lengthy terms of service or privacy policies," said Li.

In fact, Li is of the opinion that companies should implement clearer, more direct consent mechanisms that specifically highlight how user-provided content will be used for AI training purposes. 

"People should protect their data, regardless of whether they're using social media or AI tools - there's really no difference. In today's digital landscape, the convenience these platforms offer is directly traded for your privacy, and users need to be conscious of this transaction," explained Li. 

Similarly, Votee AI CTO Jacky Chan said that transparency is a major challenge. While OpenAI outlines its data sources, the specifics of the training data remain opaque, Chan explained. This lack of transparency fuels concerns about "open washing": presenting a system as open without truly auditable processes or data sources.

Chan drew a contrast with Mozilla's Open Source AI work, which advocates for auditable and transparent systems. Moving forward, he said, platforms must balance user engagement with fundamental privacy rights.

This involves clear data-use policies that explicitly state which data is used for training and under what conditions, said Chan. In addition, AI companies must provide robust, accessible options that allow users to control their data, especially where training is concerned. This can come in the form of explicit opt-in or opt-out mechanisms.

Moving beyond policy statements, AI companies can strive for genuine transparency by providing greater auditability of training data and model behaviour. To build real trust, companies must also comply strictly with local laws and regulations.


